Can Large Language Models Adapt to Other Agents In-Context?
Riemer, Matthew, Ashktorab, Zahra, Bouneffouf, Djallel, Das, Payel, Liu, Miao, Weisz, Justin D., Campbell, Murray
As the research community aims to build better AI assistants that are more dynamic and personalized to the diversity of humans that they interact with, there is increased interest in evaluating the theory of mind capabilities of large language models (LLMs). Indeed, several recent studies suggest that LLM theory of mind capabilities are quite impressive, approximating human-level performance. Our paper aims to rebut this narrative and argues instead that past studies were not directly measuring agent performance, potentially leading to findings that are illusory in nature. We draw a strong distinction between what we call literal theory of mind, i.e., measuring the agent's ability to predict the behavior of others, and functional theory of mind, i.e., adapting to agents in-context based on a rational response to predictions of their behavior. We find that top-performing open source LLMs may display strong capabilities in literal theory of mind, depending on how they are prompted, but seem to struggle with functional theory of mind -- even when partner policies are exceedingly simple. Our work serves to highlight the double-sided nature of inductive bias in LLMs when adapting to new situations. While this bias can lead to strong performance over limited horizons, it often hinders convergence to optimal long-term behavior.
- North America > United States > New Jersey (0.04)
- North America > Canada > Quebec > Montreal (0.04)
- Leisure & Entertainment > Games (1.00)
- Health & Medicine > Therapeutic Area > Neurology (0.46)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.51)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents > Agent Societies (0.45)
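The literal/functional distinction the abstract draws can be made concrete with a toy repeated game. The sketch below is purely illustrative (it is not the paper's benchmark): a partner plays an exceedingly simple fixed policy, "literal" theory of mind predicts its next move from history, and "functional" theory of mind acts on that prediction with a best response. The game, policy, and function names are all hypothetical choices.

```python
import random
from collections import Counter

# Repeated rock-paper-scissors against a hypothetical fixed partner policy.
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def partner_policy(history):
    return "rock"                      # exceedingly simple partner: always rock

def literal_tom(history):
    # Literal ToM: predict the partner's next move from its observed moves.
    if not history:
        return random.choice(list(BEATS))
    return Counter(history).most_common(1)[0][0]

def functional_tom(history):
    # Functional ToM: respond rationally to the prediction.
    return BEATS[literal_tom(history)]

random.seed(0)
history, wins = [], 0
for _ in range(100):
    partner = partner_policy(history)
    wins += functional_tom(history) == BEATS[partner]
    history.append(partner)

print(wins)  # every round after the first (random) one is won
```

An in-context learner with functional theory of mind converges to the best response after a single observation here; the paper's point is that LLMs often fail to do so even in settings this simple.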
Perfecting Liquid-State Theories with Machine Intelligence
Recent years have seen a significant increase in the use of machine intelligence for predicting electronic structure, molecular force fields, and the physicochemical properties of various condensed systems. However, substantial challenges remain in developing a comprehensive framework capable of handling a wide range of atomic compositions and thermodynamic conditions. This perspective discusses potential future developments in liquid-state theories that leverage recent advancements in functional machine learning. By harnessing the strengths of theoretical analysis and machine learning techniques, including surrogate models, dimension reduction, and uncertainty quantification, we envision that liquid-state theories will gain significant improvements in accuracy, scalability, and computational efficiency, enabling their broader application across diverse materials and chemical systems.
- North America > United States > California > Santa Barbara County > Santa Barbara (0.14)
- North America > United States > California > Riverside County > Riverside (0.14)
- Europe > Portugal > Braga > Braga (0.04)
- (2 more...)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.68)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.46)
Variational principle to regularize machine-learned density functionals: the non-interacting kinetic-energy functional
del Mazo-Sevillano, P., Hermann, J.
Practical density functional theory (DFT) owes its success to the groundbreaking work of Kohn and Sham that introduced the exact calculation of the non-interacting kinetic energy of the electrons using an auxiliary mean-field system. However, the full power of DFT will not be unleashed until the exact relationship between the electron density and the non-interacting kinetic energy is found. Various attempts have been made to approximate this functional, similar to the exchange--correlation functional, with much less success due to the larger contribution of kinetic energy and its more non-local nature. In this work we propose a new and efficient regularization method to train density functionals based on deep neural networks, with particular interest in the kinetic-energy functional. The method is tested on (effectively) one-dimensional systems, including the hydrogen chain, non-interacting electrons, and atoms of the first two periods, with excellent results. For the atomic systems, the generalizability of the regularization method is demonstrated by training also an exchange--correlation functional, and the contrasting nature of the two functionals is discussed from a machine-learning perspective.
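The abstract does not spell out the proposed regularizer, so the sketch below is only a generic illustration of the underlying task: fitting a regularized map from a discretized 1D density to an energy-like scalar. The ridge (Tikhonov) penalty, the linear model, and all data here are toy stand-ins, not the paper's variational regularization method or its deep-network functionals.

```python
import numpy as np

# Toy regularized fit of a density-to-energy map on a 1D grid.
rng = np.random.default_rng(0)
grid = 32                                   # points discretizing [0, 1]
X = rng.random((200, grid))                 # hypothetical "densities"
X /= X.sum(axis=1, keepdims=True)           # normalize to unit particle number
w_true = np.linspace(0.0, 1.0, grid)        # hidden linear "functional"
y = X @ w_true + 0.01 * rng.standard_normal(200)

lam = 1e-3                                  # regularization strength
w = np.linalg.solve(X.T @ X + lam * np.eye(grid), X.T @ y)
print(np.abs(X @ w - y).mean())             # small training residual
```

The regularization term tames the ill-conditioned inverse problem; the paper's contribution is a variational principle playing an analogous stabilizing role for neural kinetic-energy functionals.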
Mind the Gap! Bridging Explainable Artificial Intelligence and Human Understanding with Luhmann's Functional Theory of Communication
Keenan, Bernard, Sokol, Kacper
Over the past decade explainable artificial intelligence has evolved from a predominantly technical discipline into a field that is deeply intertwined with social sciences. Insights such as human preference for contrastive -- more precisely, counterfactual -- explanations have played a major role in this transition, inspiring and guiding the research in computer science. Other observations, while equally important, have received much less attention. The desire of human explainees to communicate with artificial intelligence explainers through a dialogue-like interaction has been mostly neglected by the community. This poses many challenges for the effectiveness and widespread adoption of such technologies, as delivering a single explanation optimised according to some predefined objectives may fail to engender understanding in its recipients and satisfy their unique needs, given the diversity of human knowledge and intention. Using insights elaborated by Niklas Luhmann and, more recently, Elena Esposito, we apply social systems theory to highlight challenges in explainable artificial intelligence and offer a path forward, striving to reinvigorate the technical research in this direction. This paper aims to demonstrate the potential of systems theoretical approaches to communication in understanding problems and limitations of explainable artificial intelligence.
- North America > United States > New York > New York County > New York City (0.04)
- Europe > United Kingdom > England > Greater London > London (0.04)
- Asia > South Korea > Seoul > Seoul (0.04)
- (5 more...)
- Law (1.00)
- Health & Medicine (1.00)
- Government (1.00)
D4FT: A Deep Learning Approach to Kohn-Sham Density Functional Theory
Li, Tianbo, Lin, Min, Hu, Zheyuan, Zheng, Kunhao, Vignale, Giovanni, Kawaguchi, Kenji, Neto, A. H. Castro, Novoselov, Kostya S., Yan, Shuicheng
Kohn-Sham Density Functional Theory (KS-DFT) has been traditionally solved by the Self-Consistent Field (SCF) method. Behind the SCF loop is the physics intuition of solving a system of non-interacting single-electron wave functions under an effective potential. In this work, we propose a deep learning approach to KS-DFT. First, in contrast to the conventional SCF loop, we propose to directly minimize the total energy by reparameterizing the orthogonal constraint as a feed-forward computation. We prove that such an approach has the same expressivity as the SCF method, yet reduces the computational complexity from O(N^4) to O(N^3). Second, the numerical integration, which involves a summation over the quadrature grids, can be amortized over the optimization steps. At each step, stochastic gradient descent (SGD) is performed with a sampled minibatch of the grids. Extensive experiments are carried out to demonstrate the advantage of our approach in terms of efficiency and stability. In addition, we show that our approach enables us to explore more complex neural-based wave functions.
- Asia > Singapore (0.04)
- North America > United States > Minnesota (0.04)
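The core trick the D4FT abstract describes, reparameterizing the orthogonality constraint as a feed-forward computation so the energy can be minimized directly, can be sketched on a toy problem. Here a QR factorization maps an unconstrained matrix W to orthonormal orbitals; the symmetric matrix H, the problem sizes, and the finite-difference gradients are illustrative stand-ins, not the paper's implementation (which uses automatic differentiation and real quadrature grids).

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 2                          # basis size, number of occupied orbitals
A = rng.standard_normal((n, n))
H = (A + A.T) / 2                    # toy symmetric one-electron Hamiltonian

def energy(W):
    Q, _ = np.linalg.qr(W)           # columns orthonormal by construction
    return np.trace(Q.T @ H @ Q)

W = rng.standard_normal((n, k))
E0 = energy(W)
lr, eps = 0.1, 1e-6
for _ in range(300):
    g = np.zeros_like(W)
    for i in range(n):               # crude numerical gradient of E(W)
        for j in range(k):
            Wp = W.copy()
            Wp[i, j] += eps
            g[i, j] = (energy(Wp) - energy(W)) / eps
    W -= lr * g                      # direct minimization, no SCF loop

exact = np.sort(np.linalg.eigvalsh(H))[:k].sum()   # best achievable energy
print(energy(W), exact)
```

Because orthonormality holds by construction for any W, plain gradient descent can lower the energy toward the sum of the k smallest eigenvalues, which is the role SGD over quadrature minibatches plays in the paper at scale.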
Evolving symbolic density functionals
Ma, He, Narayanaswamy, Arunachalam, Riley, Patrick, Li, Li
Systematic development of accurate density functionals has been a decades-long challenge for scientists. Despite the emerging application of machine learning (ML) in approximating functionals, the resulting ML functionals usually contain tens of thousands of parameters, creating a huge gap between their formulation and that of conventional human-designed symbolic functionals. We propose a new framework, Symbolic Functional Evolutionary Search (SyFES), that automatically constructs accurate functionals in symbolic form, which is more explainable to humans, cheaper to evaluate, and easier to integrate into existing density functional theory codes than other ML functionals. We first show that without prior knowledge, SyFES reconstructed a known functional from scratch. We then demonstrate that evolving from an existing functional $\omega$B97M-V, SyFES found a new functional, GAS22 (Google Accelerated Science 22), that performs better for the majority of molecular types in the test set of the Main Group Chemistry Database (MGCDB84). Our framework opens a new direction in leveraging computing power for the systematic development of symbolic density functionals.
- North America > United States > Minnesota (0.04)
- North America > United States > California > Santa Clara County > Mountain View (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > Ukraine > Kharkiv Oblast > Kharkiv (0.04)
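The "rediscover a known functional from scratch" experiment can be caricatured with a minimal evolutionary search over symbolic expressions. Everything below is a toy: the one-variable grammar, the mutation rule, and the target expression are hypothetical choices standing in for SyFES's actual search space over density features.

```python
import random

random.seed(1)
TOKENS = ["x", "x*x", "1.0", "2.0"]
OPS = ["+", "-", "*"]
xs = [0.1 * i for i in range(1, 11)]
target = [x * x + 2.0 * x for x in xs]       # "known functional" to rediscover

def random_expr():
    return f"({random.choice(TOKENS)}{random.choice(OPS)}{random.choice(TOKENS)})"

def mutate(expr):
    # Grow the expression by appending one operator and one token.
    return f"({expr}{random.choice(OPS)}{random.choice(TOKENS)})"

def loss(expr):
    try:
        return sum((eval(expr, {"x": x}) - t) ** 2 for x, t in zip(xs, target))
    except Exception:
        return float("inf")

population = [random_expr() for _ in range(20)]
for _ in range(200):
    population.sort(key=loss)                 # keep the fittest expressions
    population = population[:10] + [
        mutate(random.choice(population[:10])) for _ in range(10)
    ]
best = min(population, key=loss)
print(best, loss(best))
```

The evolved winner stays a short, human-readable formula, which is the property that makes symbolic functionals cheap to evaluate and easy to drop into existing DFT codes.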
Machine learning universal bosonic functionals
The one-body reduced density matrix γ plays a fundamental role in describing and predicting quantum features of bosonic systems, such as Bose-Einstein condensation. The recently proposed reduced density matrix functional theory for bosonic ground states establishes the existence of a universal functional F[γ] that recovers quantum correlations exactly. Based on a decomposition of γ, we have developed a method to design reliable approximations for such universal functionals: Our results suggest that for translationally invariant systems the constrained search approach of functional theories can be transformed into an unconstrained problem through a parametrization of a Euclidean space. This simplification of the search approach allows us to use standard machine learning methods to perform a quite efficient computation of both F[γ] and its functional derivative. For the Bose-Hubbard model, we present a comparison between our approach and the quantum Monte Carlo method.
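The "constrained search becomes unconstrained" idea can be illustrated with the simplest relevant constraint set: occupation numbers of γ must be non-negative and sum to the particle number N. A softmax-style map from an unconstrained Euclidean vector satisfies both conditions by construction, so any optimizer can roam freely over it. This is only an illustration of the parametrization principle, not the paper's specific decomposition of γ.

```python
import math

def occupations(theta, N):
    # Map an unconstrained vector theta in R^n to occupation numbers n_i
    # with n_i >= 0 and sum(n_i) == N, by construction.
    w = [math.exp(t) for t in theta]
    s = sum(w)
    return [N * x / s for x in w]

n = occupations([0.0, 1.0, -2.0], N=5.0)
print(n, sum(n))   # non-negative entries summing to N, for any theta
```

With the constraints built into the parametrization, standard machine learning machinery (gradient descent, neural surrogates) applies directly, which is the simplification the abstract describes.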
Artificial Intelligence Comes to Battery Design
DOE/Argonne National Laboratory researchers have turned to the power of machine learning and artificial intelligence to dramatically accelerate battery discovery. The press release likens designing new batteries with the best molecular building blocks for battery components to trying to create a recipe for a new kind of cake, when you have billions of potential ingredients. The challenge involves determining which ingredients work best together – or, more simply, produce an edible (or, in the case of batteries, a safe) product. But even with state-of-the-art supercomputers, scientists cannot precisely model the chemical characteristics of every molecule that could prove to be the basis of a next-generation battery material. As described in two new papers, the Argonne researchers first created a highly accurate database of roughly 133,000 small organic molecules that could form the basis of battery electrolytes.
- Energy > Energy Storage (1.00)
- Electrical Industrial Apparatus (1.00)
Building a better battery with machine learning and Artificial Intelligence - ET CIO
Washington D.C.: With the help of machine learning and artificial intelligence, researchers are accelerating battery discovery. Researchers at the U.S. Department of Energy's (DOE) Argonne National Laboratory have turned to the power of machine learning and artificial intelligence to dramatically accelerate the process of battery discovery, according to a study published in Chemical Science. As described in two new papers, Argonne researchers first created a highly accurate database of roughly 133,000 small organic molecules that could form the basis of battery electrolytes. To do so, they used a computationally intensive model called G4MP2. This collection of molecules, however, represented only a small subset of 166 billion larger molecules that scientists wanted to probe for electrolyte candidates.
- Energy > Energy Storage (0.89)
- Government > Regional Government > North America Government > United States Government (0.57)